
    Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

    In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of training data (they know neither the decision boundaries nor the user template). Starting from their own templates, adversaries craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. We design our method to cope with weak assumptions about the attacker, and we evaluate its effectiveness on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases with fewer than ten injection attempts. Even in black-box scenarios, we find that exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate this countermeasure against a set of intra-user variability factors which may present the same directionality characteristics, obtaining equal error rates between 7% and 14% for the detection and detecting over 99% of attacks after only two sample injections. Comment: 12 pages.
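    To make the countermeasure concrete, the following is a minimal sketch, assuming 128-dimensional face embeddings and synthetic update histories, of how consistent directionality of template updates in feature space can separate a poisoning sequence from benign intra-user variation; the score, dimensionality, and toy data are illustrative assumptions, not the paper's exact detector.

    import numpy as np

    def directionality_score(template_history):
        """Mean pairwise cosine similarity between successive template-update
        directions. Benign intra-user drift is roughly isotropic (score near 0);
        a poisoning attack drags the template consistently toward the attacker,
        so successive updates share a direction (score near 1)."""
        diffs = np.diff(np.asarray(template_history, dtype=float), axis=0)
        dirs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
        sims = dirs @ dirs.T
        iu = np.triu_indices(len(dirs), k=1)
        return float(sims[iu].mean())

    # toy usage: a drifting (poisoned) history vs. a jittering (benign) one
    rng = np.random.default_rng(0)
    drift = 0.1 * np.ones(128)                      # hypothetical 128-d embeddings
    poisoned = np.cumsum(drift + rng.normal(0, 0.01, (6, 128)), axis=0)
    benign = rng.normal(0, 1.0, (6, 128))
    print(directionality_score(poisoned))           # ~1.0 -> flag as malicious
    print(directionality_score(benign))             # ~0.0 -> accept updates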

    GCNH: A Simple Method For Representation Learning On Heterophilous Graphs

    Graph Neural Networks (GNNs) are well-suited for learning on homophilous graphs, i.e., graphs in which edges tend to connect nodes of the same type. Yet, achieving consistent GNN performance on heterophilous graphs remains an open research problem. Recent works have proposed extensions to standard GNN architectures to improve performance on heterophilous graphs, trading off model simplicity for prediction accuracy. However, these models fail to capture basic graph properties, such as neighborhood label distribution, which are fundamental for learning. In this work, we propose GCN for Heterophily (GCNH), a simple yet effective GNN architecture applicable to both heterophilous and homophilous scenarios. GCNH learns and combines separate representations for a node and its neighbors, using one learned importance coefficient per layer to balance the contributions of center nodes and neighborhoods. We conduct extensive experiments on eight real-world graphs and a set of synthetic graphs with varying degrees of heterophily to demonstrate how the design choices for GCNH lead to a sizable improvement over a vanilla GCN. Moreover, GCNH outperforms state-of-the-art models of much higher complexity on four out of eight benchmarks, while producing comparable results on the remaining datasets. Finally, we discuss and analyze the lower complexity of GCNH, which results in fewer trainable parameters and faster training times than other methods, and show how GCNH mitigates the oversmoothing problem. Comment: Accepted at the 2023 International Joint Conference on Neural Networks (IJCNN).
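    As an illustration of the architecture described above, here is a minimal PyTorch sketch of a GCNH-style layer, with separate transforms for the center node and its mean-aggregated neighborhood mixed by one learned scalar per layer; the attribute names, sigmoid squashing, and dense adjacency are readability assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class GCNHLayer(nn.Module):
        """Sketch of a GCNH-style layer: separate linear maps for the center
        node and its mean-aggregated neighborhood, mixed by a single learned
        importance coefficient per layer."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.w_center = nn.Linear(in_dim, out_dim)
            self.w_neigh = nn.Linear(in_dim, out_dim)
            self.beta = nn.Parameter(torch.tensor(0.0))  # one scalar per layer

        def forward(self, x, adj):
            # adj: dense (N, N) adjacency matrix; row-normalize for mean pooling
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            neigh = (adj @ x) / deg
            b = torch.sigmoid(self.beta)  # keep the mixing weight in (0, 1)
            return torch.relu(b * self.w_center(x) + (1.0 - b) * self.w_neigh(neigh))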

    Seeing Red: PPG Biometrics Using Smartphone Cameras

    In this paper, we propose a system that enables photoplethysmogram (PPG)-based authentication using a smartphone camera. PPG signals are obtained by recording a video as users rest their finger on top of the camera lens. The signals can be extracted from subtle changes in the video caused by changes in the light reflection properties of the skin as blood flows through the finger. We collect a dataset of PPG measurements from 15 users over the course of 6-11 sessions per user, using an iPhone X for the measurements. We design an authentication pipeline that leverages the uniqueness of each individual's cardiovascular system, identifying a set of distinctive features from each heartbeat. We conduct a set of experiments to evaluate the recognition performance of the PPG biometric trait, including cross-session scenarios which have been disregarded in previous work. We find that when aggregating sufficient samples for the decision we achieve an EER as low as 8%, but that performance decreases greatly in the cross-session scenario, with an average EER of 20%. Comment: 8 pages, 15th IEEE Computer Society Workshop on Biometrics 2020.
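    A minimal sketch of the kind of signal extraction such a pipeline relies on, assuming RGB frames and SciPy for filtering; the filter band, order, and minimum peak spacing are illustrative choices rather than the paper's exact parameters.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def ppg_from_frames(frames, fps):
        """Raw PPG as the mean red-channel intensity of each RGB frame from the
        finger-on-lens video, band-passed around plausible heart rates
        (0.7-4 Hz, i.e. 42-240 bpm)."""
        raw = np.array([frame[..., 0].mean() for frame in frames])
        b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
        return filtfilt(b, a, raw)

    def segment_heartbeats(ppg, fps):
        """Split the filtered signal into single beats at systolic peaks, so
        per-beat features can be extracted and matched against a template."""
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fps))  # peaks >= 0.4 s apart
        return [ppg[start:end] for start, end in zip(peaks[:-1], peaks[1:])]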

    It’s Always April Fools’ Day! On the Difficulty of Social Network Misinformation Classification via Propagation Features

    Given the huge impact that Online Social Networks (OSNs) have had on the way people get informed and form their opinions, they have become an attractive playground for malicious entities that want to spread misinformation and leverage its effect. In fact, misinformation spreads easily on OSNs and is a serious threat to modern society, possibly influencing the outcome of elections or even putting people's lives at risk (e.g., "anti-vaccine" misinformation). It is therefore of paramount importance for society to have some form of "validation" of information spreading through OSNs, and such wide-scale validation would greatly benefit from automatic tools. In this paper, we show that it is difficult to carry out an automatic classification of misinformation considering only structural properties of content propagation cascades. We focus on structural properties because they would be inherently difficult to manipulate with the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (as representative of misinformation) and scientific news (representative of fact-checked content). Our findings show that conspiracy content reverberates in a way that is hard to distinguish from scientific content: for the classification mechanisms we investigated, the classification F1-score never exceeds 0.65 during content propagation stages, and is still below 0.7 even after propagation is complete.
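    For context, these are the kinds of purely structural cascade features such a classifier can be restricted to, sketched with networkx; the feature set here is illustrative and not necessarily the one evaluated in the paper.

    import networkx as nx

    def cascade_features(cascade: nx.DiGraph, root):
        """Structural features of a propagation cascade (a tree of reshares
        rooted at the original post), computed without any content features."""
        depths = nx.single_source_shortest_path_length(cascade, root)
        breadth_per_level = {}
        for d in depths.values():
            breadth_per_level[d] = breadth_per_level.get(d, 0) + 1
        return {
            "size": cascade.number_of_nodes(),
            "depth": max(depths.values()),
            "max_breadth": max(breadth_per_level.values()),
            # structural virality: mean pairwise distance between cascade nodes
            "structural_virality": nx.average_shortest_path_length(cascade.to_undirected()),
        }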

    2-hop Neighbor Class Similarity (2NCS): A graph structural metric indicative of graph neural network performance

    Graph Neural Networks (GNNs) achieve state-of-the-art performance on graph-structured data across numerous domains. Their underlying ability to represent nodes as summaries of their vicinities has proven effective for homophilous graphs in particular, in which same-type nodes tend to connect. On heterophilous graphs, in which different-type nodes are likely connected, GNNs perform less consistently, as neighborhood information might be less representative or even misleading. On the other hand, GNN performance is not inferior on all heterophilous graphs, and there is a lack of understanding of what other graph properties affect GNN performance. In this work, we highlight the limitations of the widely used homophily ratio and the recent Cross-Class Neighborhood Similarity (CCNS) metric in estimating GNN performance. To overcome these limitations, we introduce 2-hop Neighbor Class Similarity (2NCS), a new quantitative graph structural property that correlates with GNN performance more strongly and consistently than alternative metrics. 2NCS considers two-hop neighborhoods as a theoretically derived consequence of the two-step label propagation process that governs GCN training and inference. Experiments on one synthetic and eight real-world graph datasets confirm consistent improvements over existing metrics in estimating the accuracy of GCN- and GAT-based architectures on the node classification task. Comment: Accepted at the 3rd Workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2023.
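    A rough sketch of how a 2NCS-style quantity can be computed from an adjacency matrix and node labels, following the abstract's description of two-hop neighborhoods; the exact definition and normalization in the paper may differ.

    import numpy as np

    def two_hop_label_histograms(adj, labels):
        """Per-node class histogram over two-hop neighborhoods, the quantity a
        2NCS-style metric compares across nodes (two-hop reachability via A @ A)."""
        labels = np.asarray(labels)
        classes = np.unique(labels)
        a2 = ((adj @ adj) > 0).astype(float)
        np.fill_diagonal(a2, 0.0)                  # exclude the node itself
        onehot = (labels[:, None] == classes[None, :]).astype(float)
        hist = a2 @ onehot
        return hist / hist.sum(axis=1, keepdims=True).clip(min=1.0)

    def intra_class_similarity(adj, labels):
        """Average cosine similarity of two-hop histograms within each class;
        higher values suggest two-hop neighborhoods are predictive of labels."""
        labels = np.asarray(labels)
        hist = two_hop_label_histograms(adj, labels)
        unit = hist / np.linalg.norm(hist, axis=1, keepdims=True).clip(min=1e-9)
        scores = []
        for c in np.unique(labels):
            g = unit[labels == c]
            iu = np.triu_indices(len(g), k=1)
            if len(iu[0]) > 0:
                scores.append((g @ g.T)[iu].mean())
        return float(np.mean(scores))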

    They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors

    In this paper, we describe how the electronic rolling shutter in CMOS image sensors can be exploited using a bright, modulated light source (e.g., an inexpensive, off-the-shelf laser) to inject fine-grained image disruptions. We demonstrate the attack on seven different CMOS cameras, ranging from cheap IoT to semi-professional surveillance cameras, to highlight the wide applicability of the rolling shutter attack. We model the fundamental factors affecting a rolling shutter attack in an uncontrolled setting. We then perform an exhaustive evaluation of the attack's effect on the task of object detection, investigating the effect of attack parameters. We validate our model against empirical data collected on two separate cameras, showing that by simply using information from the camera's datasheet the adversary can accurately predict the injected distortion size and optimize their attack accordingly. We find that an adversary can hide up to 75% of objects perceived by state-of-the-art detectors by selecting appropriate attack parameters. We also investigate the stealthiness of the attack in comparison to a naïve camera-blinding attack, showing that common image distortion metrics cannot detect the presence of the attack. Therefore, we present a new, accurate and lightweight enhancement to the backbone network of an object detector to recognize rolling shutter attacks. Overall, our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems. Comment: 15 pages, 15 figures.
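    The datasheet-based prediction mentioned above reduces to simple timing arithmetic. The following is a simplified model, in the spirit of the paper, of how many rows a single light pulse corrupts; it ignores blanking intervals and other sensor specifics.

    def distorted_rows(pulse_s, row_readout_s, exposure_s):
        """Rows corrupted by one modulated-light pulse under a rolling shutter:
        row starts are staggered by the row readout time and each row integrates
        light for the exposure time, so a pulse overlaps roughly
        (pulse + exposure) / readout consecutive rows."""
        return int((pulse_s + exposure_s) / row_readout_s)

    # e.g. a 1 ms pulse with 10 us row readout and 0.1 ms exposure -> ~110 rows
    print(distorted_rows(pulse_s=1e-3, row_readout_s=10e-6, exposure_s=0.1e-3))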

    SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations

    Research into adversarial examples (AE) has developed rapidly, yet static adversarial patches are still the main technique for conducting attacks in the real world, despite being obvious, semi-permanent, and unmodifiable once deployed. In this paper, we propose Short-Lived Adversarial Perturbations (SLAP), a novel technique that allows adversaries to realize physically robust real-world AE by using a light projector. Attackers can project a specifically crafted adversarial perturbation onto a real-world object, transforming it into an AE. This gives the adversary greater control over the attack compared to adversarial patches: (i) projections can be dynamically turned on and off or modified at will, and (ii) projections do not suffer from the locality constraint imposed by patches, making them harder to detect. We study the feasibility of SLAP in the self-driving scenario, targeting both object detection and traffic sign recognition tasks, focusing on the detection of stop signs. We conduct experiments in a variety of ambient light conditions, including outdoors, showing how in non-bright settings the proposed method generates AE that are extremely robust, causing misclassifications on state-of-the-art networks with up to 99% success rate for a variety of angles and distances. We also demonstrate that SLAP-generated AE do not present the detectable behaviours seen in adversarial patches and therefore bypass SentiNet, a physical AE detection method. We evaluate other defences, including an adaptive defender using adversarial learning, which is able to reduce the attack's effectiveness by up to 80% even in favourable attacker conditions. Comment: 13 pages, to be published in USENIX Security 2021; project page: https://github.com/ssloxford/short-lived-adversarial-perturbation
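    To illustrate the kind of optimization a projector-based attack involves, here is a hypothetical single gradient step in PyTorch, assuming a differentiable render function that models how the projected pattern appears on the object under the current lighting; the loss and the signed-gradient update are illustrative, not the paper's recipe.

    import torch

    def projection_attack_step(delta, render, model, target_logit, lr=0.01):
        """One optimization step for a projector-based AE: render the scene
        with the current projected pattern `delta`, then lower the detector's
        logit for the true class (e.g. "stop sign") by a signed-gradient step."""
        delta = delta.detach().requires_grad_(True)
        image = render(delta.clamp(0.0, 1.0))      # rendered scene, (1, 3, H, W)
        loss = model(image)[0, target_logit]       # confidence in the true class
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()        # FGSM-style signed update
        return delta.detach()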

    When your fitness tracker betrays you: quantifying the predictability of biometric features across contexts

    This is the dataset collected for the 2018 IEEE S&P paper "When Your Fitness Tracker Betrays You: Quantifying the Predictability of Biometric Features Across Contexts". We provide .zip files for each individual biometric and a readme file that describes the data format and structure. If you use any of the data, please cite the original paper as follows:

        @inproceedings{seberz2018,
          title={When Your Fitness Tracker Betrays You: Quantifying the Predictability of Biometric Features Across Contexts},
          author={Eberz, Simon and Lovisotto, Giulio and Patan\`e, Andrea and Kwiatkowska, Marta and Lenders, Vincent and Martinovic, Ivan},
          booktitle={Proceedings of the 2018 IEEE Symposium on Security and Privacy},
          year={2018},
          organization={IEEE}
        }
